Round Overview
We are given two numbers with the same number of digits. We want to know whether they differ in at most one digit. To decide this, we calculate the number of positions where the given numbers have different digits. The answer is positive if this count is not greater than `1`.
How do we get the digits of the given numbers?
The last digit of a number `x` is equal to `x % 10`.
`x // 10` is the number `x` without its last digit.
Then `(x // 10) % 10` is the second-to-last digit of `x`.
You can check that `(x // 100) % 10` and `(x // 1000) % 10` are the third-to-last and fourth-to-last digits of `x`, and so on.
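A quick standalone check of these formulas (in Java, where `/` on ints is integer division):

```java
public class DigitDemo {
    public static void main(String[] args) {
        int x = 1234;
        // k-th digit from the end of x is (x / 10^k) % 10
        System.out.println(x % 10);          // last digit: 4
        System.out.println((x / 10) % 10);   // second-to-last: 3
        System.out.println((x / 100) % 10);  // third-to-last: 2
        System.out.println((x / 1000) % 10); // fourth-to-last: 1
    }
}
```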
In the code below we create a variable `counter` to count pairs of different digits. First, we compare the last digits of the given numbers, `A % 10` and `B % 10`. Then we cut off those last digits by dividing `A` and `B` by `10`. We repeat these two steps until there are no more digits. Thanks to the condition about the same number of digits, `A` and `B` run out of digits at the same moment, so one loop is enough. The complexity is `O(\text{number of digits}) = O( \log A ) = O( \log B )`.
public static String eyesight(int a, int b) {
    int counter = 0;
    while (a > 0) { // a equal to 0 means that A has no more digits, and so does B
        if (a % 10 != b % 10) ++counter;
        a /= 10;
        b /= 10;
    }
    if (counter <= 1) return "happy";
    return "glasses";
}
The state is a triple of numbers: the sizes of the three piles. For a given initial state of the piles, we are asked to find out whether it is possible to make the piles equal. It's impossible if the total number of stones `S = A+B+C` is not divisible by `3`. Otherwise, we want to have `S // 3` stones in every pile.
With one recursive function we will simulate everything Limak can do. Starting from the initial state, we try to pick every possible pair of unequal piles and move stones between them, calling our function for the new state. Using a set structure with `O( \log )` cost per operation, we can store every state we have processed (we store triples of numbers). There is no need to run our function twice for the same state; that's why we store reached states (without this, our program could never terminate, being stuck in an infinite loop). This technique is called memoization. The solution reaches every state Limak can get and has complexity `O( D \log D )`, where `D` is the number of possible states. We need to run our function for the initial pile sizes and check whether the set of reached states contains the desired state with equal piles. But is the number of possible states `D` small enough?
We have the constraint that the number of stones `S` is at most `1500`, because `A, B, C` are up to `500`. The first pile can have `O(S)` possible sizes, and the same holds for the second pile: `O(S)` possibilities. But once the sizes of the first two piles are selected, the third pile can have only one size, so that the sum of sizes is exactly `S`. So there are at most `O(S^2)` different states, and the described solution runs in `O(S^2 \log (S^2)) = O(S^2 \log S)`. It's enough to get AC. In general, there are `C(n-1, k-1)` ways to split `n` elements into `k` nonempty piles (it's a binomial coefficient).
We can get rid of the log factor in the complexity. Instead of a set, we can use a two-dimensional array `can[][]`. We say `can[x][y]` is true iff we reached the state with sizes `x, y, S-x-y`. It's equivalent to keeping states in a set but gives us `O(S^2)` complexity.
I encourage you to solve a slightly different version of this task - Div1 Easy.
static boolean[][] can = new boolean[1505][1505];

static void rec(int[] t) {
    if (can[t[0]][t[1]]) return; // we processed this state before
    can[t[0]][t[1]] = true;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (t[i] < t[j]) {
                // we move stones from pile j to pile i
                // the third pile has index 0+1+2-i-j
                int[] t2 = { t[i] * 2, t[j] - t[i], t[0 + 1 + 2 - i - j] };
                rec(t2);
            }
}

public static String equalPiles(int a, int b, int c) {
    int[] t = { a, b, c };
    int s = a + b + c;
    if (s % 3 == 0) {
        rec(t);
        if (can[s / 3][s / 3]) return "possible";
    }
    return "impossible";
}
You were given a standard implementation of mergesort. Mergesort compares pairs of elements until the order of everything is fixed. The important thing is that comparing two elements fixes their order (which one is smaller) forever. To see why, you must analyze and understand this algorithm. Note: mergesort doesn't do unnecessary comparisons, so the strange function `LESS(a,b)` won't cause any crash. It compares two elements only if it's impossible to deduce their order from previous comparisons.
We want to find the probability of getting sequence `P`. For every comparison `LESS(a,b)` there is a 50% chance that it returns the correct verdict, i.e. the one ordering `a` and `b` as in `P`. An incorrect verdict makes getting sequence `P` impossible. Let's simulate our function for the sequence `{1,2, \ldots, N}`. Every time `LESS()` is called, we assume that it gives the correct verdict. Then it's impossible to get a wrong sequence (other than `P`): as we said, mergesort terminates only when the order is fixed (known). Finally, the program terminates, giving us sequence `P`. The probability of this scenario is `0.5^k`, where `k` is the number of calls of `LESS()`, since we need the correct returned value in every call.
So the solution is to implement the given pseudocode with `LESS()` returning what we want (the order as in `P`). To do this, we remember in an array `whereIs[x]` where in `P` value `x` should be. We count the calls of `LESS` and return ` \log ( 0.5 ^ k ) = k \cdot \log ( 0.5 )`. The complexity is equal to the complexity of mergesort, `O(n \log n)`.
In case of doubts about "is it always possible to get `P`?": notice that sorting can change every permutation into the sorted one. So, for every mapping (for every description of "what should go where") there exists a sequence of answers of the function `LESS()` giving the demanded result.
#include <bits/stdc++.h>
using namespace std;

int T[42], whereIs[42];
int counter;

bool LESS(int a, int b) {
    ++counter;
    return whereIs[a] < whereIs[b];
}

void mergeSort(int left, int right) {
    if (left + 1 >= right) return;
    int mid = (left + right) / 2;
    mergeSort(left, mid);
    mergeSort(mid, right);
    int p1 = left, p2 = mid;
    vector<int> merged;
    while (p1 < mid || p2 < right) {
        if (p1 == mid) merged.push_back(T[p2++]);
        else if (p2 == right) merged.push_back(T[p1++]);
        else {
            if (LESS(T[p1], T[p2]))
                merged.push_back(T[p1++]);
            else
                merged.push_back(T[p2++]);
        }
    }
    for (int i = left; i < right; ++i)
        T[i] = merged[i - left];
}

double getProbability(vector<int> P) {
    int n = P.size();
    counter = 0; // reset in case of repeated calls
    for (int i = 0; i < n; ++i) {
        whereIs[P[i]] = i;
        T[i] = i + 1;
    }
    mergeSort(0, n);
    return log(0.5) * counter;
}
Let `S` denote the total number of stones, `S = A + B`.
We need one crucial observation to solve this task.
If we have a pile with `x` stones (the other one has `y = S-x`), then after one move this pile will have `2x % S` stones.
It's obvious if `x ≤ y`, because then Limak doubles this pile and it will have `2x` stones.
For `x > y`, Limak moves `y` stones out of this pile, so it will have `x - y = 2x - S \equiv 2x \pmod{S}` stones (using `y = S - x`).
It means that we can multiply the size of one pile by `2` modulo `S` and not care which pile is bigger.
Now we'll consider something like a corner case. What we get by multiplying is `size' % S`, but we want `size'`. There is one case when these two values differ: when `size' = S` (then `size' % S = 0 \ne size'`). It happens only when the piles had equal sizes before the move. It turns out that we can then simplify things and say that `size'` is `0`, i.e. "this pile is empty and the second one has `S` stones".
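A brute-force check of the `2x % S` rule (a standalone sketch, not part of the final solution): simulate one move directly and compare with the formula, taking both results modulo `S` to cover the equal-piles convention above.

```java
public class MoveDemo {
    // simulate one move for the pile that currently has x stones (the other has S - x):
    // if this pile is the smaller (or equal) one, it gets doubled;
    // otherwise y = S - x stones are moved out of it
    static int simulateMove(int x, int S) {
        int y = S - x;
        return (x <= y) ? 2 * x : x - y;
    }

    public static void main(String[] args) {
        int S = 10;
        for (int x = 0; x <= S; ++x) {
            // taken modulo S, the simulation always agrees with 2x % S
            System.out.println(x + " -> " + simulateMove(x, S) % S + " == " + (2 * x % S));
        }
    }
}
```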
OK, let's solve the task. The piles have sizes `A` and `B`.
After one move, the first pile has `A_1 = 2 A % S` stones, where `S = A+B`.
After the next move, the first pile has `A_2 = 2 A_1 % S = 2 \cdot (2 A % S) % S = 4 A % S`.
After three moves, the first pile has `A_3 = 8 A % S` stones.
After `K` moves, the first pile has `A_K = 2^K \cdot A % S` stones.
We can calculate `2^K % S` in `O( \log K)` with fast exponentiation. After all moves, the first pile has `A_K` stones, so the second pile has `S-A_K = A+B-A_K` stones. We need to return the minimum of these two values. Don't forget about a possible overflow.
public static int pileSize(int A, int B, int K) {
    int mod = A + B;
    long res = 1;
    long two = 2;
    while (K > 0) {
        if (K % 2 == 1) res = res * two % mod;
        two = two * two % mod;
        K /= 2;
    }
    // we calculated res = 2^K % mod
    res = res * A % mod;
    return Math.min((int) res, A + B - (int) res);
}
Let CC denote a connected component (described as a region in the statement). For every vertex in a tree we are given the probability of it being destroyed. We want to calculate the expected value of the sum of squares of the sizes of the CCs. Instead, let's calculate the expected number of ordered pairs of (not necessarily distinct) vertices lying in the same CC. We can do this because there are `k^2` ordered pairs of vertices in a CC of size `k`, so the square of the size of a CC is equal to the number of ordered pairs of vertices in it. Note: `(a,b)` and `(b,a)` are different ordered pairs for `a \ne b`.
We want to find the expected number of ordered pairs in the same CC. By linearity of expectation, it's equal to the sum of the probabilities of being in the same CC, taken over all ordered pairs.
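Written out, with `[\cdot]` denoting the indicator of an event:

```latex
E\Big[\sum_{\text{CC}} |\text{CC}|^2\Big]
  = E\Big[\sum_{(i,j)} [\,i \text{ and } j \text{ in the same CC}\,]\Big]
  = \sum_{(i,j)} P(i \text{ and } j \text{ in the same CC}),
```

where the sums over `(i,j)` range over all ordered pairs of (not necessarily distinct) vertices.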
Two language clarifications. First, "probability of a path" will be short for "probability that nothing on the path is destroyed". Second, "subtree `v`" means node `v` and all descendants of `v`.
Two vertices are in the same CC iff everything on the path between them (including themselves) is not destroyed. So we want to sum up the probabilities of the paths between every two ordered vertices (from every `i` to every `j`). Let `dp[v]` be the sum of the probabilities of paths from `v` into subtree `v`, including the path from `v` to `v` (the one-node path). We will do bottom-up dynamic programming on the tree rooted at vertex `0`.
Initially, we fill `dp[]` with the probabilities of one-node paths, equal to `prob[v] = 1 / (v+1)`. The modulo value `mod = 10^9+7` is prime, so by Fermat's little theorem we can calculate `(v+1)^{-1} % mod` with fast exponentiation, as `(v+1)^{mod-2}`. In bottom-up order we iterate over each vertex `v` and calculate `dp[v]`. Let's say we have `dp[c]` calculated for every child `c` of `v`. Paths from `c` into subtree `c` give us paths from `v` into subtree `c` iff vertex `v` is not destroyed, which happens with probability `prob[v]`. So their sum of probabilities should be multiplied by `prob[v]`. Hence, we need to add `dp[c] \cdot prob[v]` to `dp[v]`.
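A standalone sanity check of the Fermat inverse (class and method names here are illustrative, not from the solution): for prime `mod`, `inv = a^(mod-2) % mod` satisfies `a * inv ≡ 1 (mod mod)`.

```java
public class InverseDemo {
    static final int MOD = 1_000_000_007;

    // computes base^e % MOD with fast exponentiation
    static long power(long base, long e) {
        long res = 1;
        base %= MOD;
        while (e > 0) {
            if ((e & 1) == 1) res = res * base % MOD;
            base = base * base % MOD;
            e >>= 1;
        }
        return res;
    }

    public static void main(String[] args) {
        for (long a = 1; a <= 10; ++a) {
            long inv = power(a, MOD - 2); // Fermat: a^(mod-2) is the inverse of a
            System.out.println(a + " * " + inv + " % mod = " + (a * inv % MOD));
        }
    }
}
```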
We have calculated the `dp[]` values. Finally, let's obtain the result from them. For a vertex `v` and its child `c`:
`prob[v]` is the probability of the one-node path from `v` to `v`.
`dp[c] \cdot prob[v]` is the sum over paths from [`v`] to [subtree `c`] (yes, we used this product before).
`dp[c] \cdot (dp[v] - dp[c] \cdot prob[v])` is the sum over paths from [subtree `c`] to [subtree `v` excluding subtree `c`].
These three cases cover every ordered pair of vertices whose LCA is vertex `v`. We must sum them up and return the result multiplied by `N!`. In the implementation you can use the fact that a parent always has a lower index than its child because of the input generation; it means that a bottom-up order is simply iterating from `N-1` down to `0`.
final static int mod = 1000 * 1000 * 1000 + 7;
static int[] parent;
static long[] prob, dp;

static void findInversions(int N) { // prob[i] = (i+1)^(mod-2) % mod, the modular inverse of i+1
    prob = new long[N];
    for (int i = 0; i < N; ++i) {
        prob[i] = 1;
        long a = i + 1;
        int b = mod - 2;
        while (b > 0) {
            if (b % 2 == 1) prob[i] = prob[i] * a % mod;
            a = a * a % mod;
            b /= 2;
        }
    }
}

public static int expectedValue(int N, int R, int A, int B, int M, int LOW, int HIGH) {
    findInversions(N);
    parent = new int[N];
    final int BIL = 1000 * 1000 * 1000;
    for (int i = 1; i < N; i++) {
        R = (int)(((long) A * R + B) % M);
        long MIN = (long)(i - 1) * LOW / BIL;
        long MAX = (long)(i - 1) * HIGH / BIL;
        parent[i] = (int)(MIN + R % (MAX - MIN + 1));
    }
    long res = 0;
    // calculate dp[]
    dp = new long[N];
    for (int i = 0; i < N; ++i)
        dp[i] = prob[i];
    for (int i = N - 1; i >= 1; --i)
        dp[parent[i]] = (dp[parent[i]] + dp[i] * prob[parent[i]]) % mod;
    // sum up the three cases
    for (int i = 0; i < N; ++i) res = (res + prob[i]) % mod;
    for (int i = 1; i < N; ++i) {
        res += dp[i] * prob[parent[i]] % mod;
        res += dp[i] * (dp[parent[i]] - dp[i] * prob[parent[i]] % mod + mod) % mod; // be careful when subtracting modulo
        res %= mod;
    }
    // multiply by N!
    for (int i = 1; i <= N; ++i)
        res = res * i % mod;
    return (int) res;
}
You can check out an alternative final summation below. We can add `dp[c] \cdot dp[v]` terms while `dp[v]` is calculated only partially, i.e. before we have considered all of `v`'s children. Think about why this covers all cases.
public static int expectedValue(int N, int R, int A, int B, int M, int LOW, int HIGH) {
    [...]
    long res = 0;
    // calculate dp[] and the result at the same time
    dp = new long[N];
    for (int i = 0; i < N; ++i) {
        dp[i] = prob[i];
        res = (res + prob[i]) % mod;
    }
    for (int i = N - 1; i >= 1; --i) {
        res = (res + 2 * dp[i] * dp[parent[i]]) % mod;
        dp[parent[i]] = (dp[parent[i]] + dp[i] * prob[parent[i]]) % mod;
    }
    // multiply by N!
    for (int i = 1; i <= N; ++i)
        res = res * i % mod;
    return (int) res;
}
First, we must understand what the probability of getting some sequence is. It is equal to `0.5^k`, where `k` is the number of comparisons made by mergesort to get this sequence. To see why, you can check out the Div2 Hard editorial. From now on, a calculated/specified/given probability will mean a calculated/specified/given number of comparisons. Fortunately, we don't have to care about precision issues, because this way we can keep probabilities as integers. And we will use the fact that mergesort does at most `n \log n` comparisons (otherwise, mergesort would have bigger complexity).
We are given a permutation `P` and asked to count the permutations with greater probability, plus those lexicographically smaller with the same probability. For the second part (which, by the way, will handle the first part too), we need a function `\text{forPrefix(pref, probability)}` which calculates the number of permutations starting with a specified prefix and having a specified probability. It turns out that, to make the whole solution efficient, it's better to have a function `\text{forPrefix(pref)}` which calculates the number of such permutations for every possible probability at once. If you don't see how to use this function to solve the task, check the code below. Here we will focus on writing the `\text{forPrefix(pref)}` function, which will be called `O(N^2)` times. We will implement it in `O(N^2 \log ^2 N)` to get total complexity `O(N^4 \log ^2 N)`.
Mergesort with a random `LESS()` returns a permutation with prefix `\text{pref}` iff the following holds. If at least one of the compared numbers `a, b` is in `\text{pref}`, then `LESS(a,b)` returns the specific value not leading to a contradiction: if only `a` is in `\text{pref}`, it returns true; if only `b` is in `\text{pref}`, false; if both `a` and `b` are in `\text{pref}`, the returned value must match whether `a` is before `b` in `\text{pref}`. Otherwise, it can return any value.
The function `\text{forPrefix(pref)}` runs a mergesort simulation. Every recursive call of mergesort merges two arrays that were sorted previously (recursively). These two arrays are guaranteed to have all of their elements from `\text{pref}` at the front (let's call these parts the special prefixes). After merging we again want an array with all elements from `\text{pref}` before the other elements. First, we merge the special prefixes in the one way specified by the order of numbers in `\text{pref}`. Then we must merge the other elements in every possible way.
When merging, one array runs out of numbers first, and then the remaining suffix of the other array is appended without any comparisons (a skipped suffix). In true mergesort such a suffix contains numbers bigger than all numbers in the other array. Let's say the two arrays have sizes `n1` and `n2` with special prefixes of sizes `p1` and `p2`, and that we skip a suffix of size `x` of the second array, i.e. the first array runs out and its largest element is the last one placed by a comparison. Fixing that largest element last, there are `C((n1-p1-1) + (n2-p2-x),\; n1-p1-1)` ways (a binomial coefficient) to interleave the remaining middle elements, and this scenario uses `n1+n2-x` comparisons. We should consider both arrays in the role of the one holding the skipped suffix, iterating over its size `x`.
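A small standalone consistency check of this counting (with empty special prefixes, i.e. `p1 = p2 = 0`): summing over which array runs out first and over the skipped suffix size `x` must give all `C(n1+n2, n1)` interleavings of the two arrays.

```java
public class MergeWaysCheck {
    // binomial coefficient C(n, k); the running product is exact at every step
    static long binom(int n, int k) {
        if (k < 0 || k > n) return 0;
        long res = 1;
        for (int i = 0; i < k; ++i) res = res * (n - i) / (i + 1);
        return res;
    }

    public static void main(String[] args) {
        for (int n1 = 1; n1 <= 6; ++n1)
            for (int n2 = 1; n2 <= 6; ++n2) {
                long ways = 0;
                // first array runs out, suffix of size x skipped in the second one:
                // the first array's maximum is the last compared element, so we
                // interleave its other n1-1 elements with n2-x compared elements
                for (int x = 1; x <= n2; ++x) ways += binom((n1 - 1) + (n2 - x), n1 - 1);
                // symmetric case: the second array runs out first
                for (int x = 1; x <= n1; ++x) ways += binom((n2 - 1) + (n1 - x), n2 - 1);
                System.out.println(n1 + "," + n2 + ": " + ways + " vs " + binom(n1 + n2, n1));
            }
    }
}
```

The two sums partition the interleavings by the size of the skipped suffix, so each interleaving is counted exactly once.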
We can treat every recursive call of mergesort separately, because the order of elements within the halves doesn't affect merging them. Every call produces an array of the numbers of ways to achieve each possible number of comparisons; in the code below I keep the combined result globally in vector `RES`. We must multiply all those arrays as polynomials, like in FFT, but we can afford to do it more slowly. The total size of those arrays is limited by the total number of comparisons, which is `O( N \log N )`, so indeed brute-force multiplication gives total complexity `O(N^2 \log ^2 N)`.
About the implementation below: `RES` is the product of the arrays of numbers of ways to achieve each probability, and `\text{consider(vector)}` multiplies another such array into `RES`.
#include<bits/stdc++.h>
using namespace std;
#define FOR(i, a, b) for (int i = (a); i <= (b); ++i)
typedef long long ll;
const int mod = 1e9 + 7;
vector < int > RES;
void consider(vector < int > w) { // equivalent to FFT
assert(w.back() > 0 && w[0] == 0);
int memo_size = (int) RES.size();
RES.resize(memo_size + (int) w.size() - 1, 0);
for (int i = memo_size - 1; i >= 0; --i) {
FOR(j, 0, (int) w.size() - 1)
RES[i + j] = (RES[i + j] + (ll) RES[i] * w[j]) % mod;
RES[i] = 0;
}
}
const int nax = 105;
int binom[nax][nax];
bool smaller[nax][nax];
void mergeSort(int a, int d) { // numbers in [a, d], inclusive
if (a == d) return;
int n = d - a + 1;
int c = a + n / 2;
int b = c - 1;
mergeSort(a, b);
mergeSort(c, d);
vector < int > L, R;
FOR(i, a, b) L.push_back(i);
FOR(i, c, d) R.push_back(i);
sort(L.begin(), L.end(), [](int a, int b) {
return smaller[a][b];
});
sort(R.begin(), R.end(), [](int a, int b) {
return smaller[a][b];
});
int iL = 0, iR = 0;
int r = 0;
while (iL < (int) L.size() && iR < (int) R.size() &&
(smaller[L[iL]][R[iR]] || smaller[R[iR]][L[iL]])) {
++r;
if (smaller[L[iL]][R[iR]]) ++iL;
else ++iR;
}
if (iL >= (int) L.size() || iR >= (int) R.size()) {
vector < int > w(r + 1, 0);
w[r] = 1;
consider(w);
return;
}
vector < int > w;
int n1 = (int) L.size() - iL, n2 = (int) R.size() - iR;
FOR(repeat, 0, 1) {
// let's say we'll run out of numbers in L, i.e. the biggest number is in R
// let's say there are x numbers in R bigger than everything in L
FOR(x, 1, n2) {
int one = n1 - 1; // everything but the biggest one
int two = n2 - x;
while ((int) w.size() <= n - x) w.push_back(0);
w[n - x] += binom[one + two][one];
}
swap(n1, n2); // to avoid copy-paste
}
consider(w);
}
// for given prefix (e.g. {2,5,1,6,0,0}) calculates
// global vector RES with number of ways to achieve each probability
void forPrefix(vector < int > pref) {
int n = (int) pref.size();
FOR(i, 1, n) FOR(j, 1, n) smaller[i][j] = false;
FOR(i, 0, n - 1) if (pref[i]) {
int a = pref[i];
FOR(b, 1, n) if (a != b && !smaller[b][a]) smaller[a][b] = true;
}
RES = vector < int > ({
1
});
mergeSort(1, n);
}
int index(vector < int > w) {
FOR(i, 0, nax - 1) binom[i][0] = 1;
FOR(i, 0, nax - 1) FOR(j, 1, i) binom[i][j] = (binom[i - 1][j - 1] + binom[i - 1][j]) % mod;
forPrefix(w);
int me = (int) RES.size() - 1;
int result = 1;
// with greater probability (smaller number of comparisons)
forPrefix(vector < int > ((int) w.size(), 0)); // empty prefix, vector with N zeros
FOR(i, 0, me - 1) result = (result + RES[i]) % mod;
// with the same probability, harder part
for (int i = (int) w.size() - 1; i >= 0; --i)
while (w[i]) {
w[i]--;
if (w[i] == 0) continue;
bool ok = true;
FOR(j, 0, i - 1) if (w[j] == w[i]) ok = false;
if (!ok) continue;
forPrefix(w);
if (me < (int) RES.size()) result = (result + RES[me]) % mod;
}
return result;
}